12 research outputs found

    Mapping with Skysat Images

    Very high-resolution space imagery now competes in tasks that were previously solved with aerial images. Several very high-resolution optical satellites with a ground sampling distance (GSD) of 1 m and smaller are currently active. Not all of these satellites acquire images worldwide; nevertheless, obtaining up-to-date satellite images with very high resolution is not a problem. Mapping projects must consider not only access and quality, but also cost-effectiveness. The economic framework conditions are decisive for the choice between aerial images and very high-resolution satellite images. With a total of 21 SkySat satellites, low-cost satellites with very high resolution have changed the economic conditions. To keep costs and weight down, the SkySat satellites were not designed for the best direct geo-referencing performance, but this can be remedied by automatic orientation relative to existing orthoimages. In North Rhine-Westphalia, the cadastral maps must be checked at regular intervals to ensure that the buildings they show are complete. A test project examined whether this is possible with SkySat images. The geometric conditions and the image quality with the effective ground resolutions are investigated. Experiences from earlier publications could not be used directly, as the resolution of the SkySat images has since been improved: the satellite orbit altitude was lowered from 500 km to 450 km, improved super-resolution processing now yields a 0.50 m ground sampling distance for the SkySat Collect orthoimages, and Planet additionally improved its generation of Collect images. The required standard deviation of object details of 4 m was clearly achieved, as was the effective ground resolution of 0.5 m, provided the angle of incidence is below 20°.
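To make the accuracy check concrete, the sketch below computes a combined horizontal standard deviation from per-axis check-point residuals and compares it against the 4 m requirement mentioned above. The residual values are invented for illustration; only the 4 m tolerance comes from the abstract.

```python
import numpy as np

# Hypothetical check-point residuals (metres) between SkySat-derived and
# reference building coordinates; these values are illustrative only.
dx = np.array([1.2, -0.8, 0.5, -1.5, 0.9, -0.3])
dy = np.array([-0.7, 1.1, -0.4, 0.8, -1.2, 0.6])

def horizontal_accuracy(dx, dy):
    """Combined horizontal RMSE from per-axis residuals (zero-mean assumed)."""
    sx = np.sqrt(np.mean(dx ** 2))   # RMSE in x
    sy = np.sqrt(np.mean(dy ** 2))   # RMSE in y
    return np.hypot(sx, sy)          # combined horizontal error

tolerance = 4.0  # required standard deviation of object details (m)
meets_requirement = horizontal_accuracy(dx, dy) <= tolerance
```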

    Cognitive approaches and optical multispectral data for semi-automated classification of landforms in a rugged mountainous area

    This paper introduces a new open-source, knowledge-based framework for automatic interpretation of remote sensing images, called InterIMAGE. The framework has a flexible modular architecture in which image processing operators can be associated with both root and leaf nodes of the semantic network, a distinguishing strategy in comparison to other object-based image analysis platforms currently available. The architecture and main features, as well as an overview of the interpretation strategy implemented in InterIMAGE, are presented. The paper also reports an experiment on the classification of landforms. Different geomorphometric and textural attributes obtained from ASTER/Terra images were combined with fuzzy logic and drove the interpretation semantic network. Object-based statistical agreement indices, estimated from a comparison between the classified scene and a reference map, were used to assess the classification accuracy. The InterIMAGE interpretation strategy yielded a classification result with strong agreement and proved effective for the extraction of landforms.
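The combination of geomorphometric attributes with fuzzy logic can be sketched as follows: a trapezoidal membership function per attribute, combined with a fuzzy AND (minimum operator). The attribute names and all thresholds below are invented for illustration and are not taken from the InterIMAGE rule sets.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramping to 1 on [b, c], 0 above d."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

# Illustrative attributes for one image object (thresholds are made up):
slope_deg = 28.0   # geomorphometric attribute
texture = 0.65     # normalised textural attribute

mu_steep = trapezoid(slope_deg, 15, 25, 90, 91)      # membership in "steep slope"
mu_rough = trapezoid(texture, 0.3, 0.5, 1.0, 1.01)   # membership in "rough texture"

# Fuzzy AND (minimum operator) drives the class decision in the semantic network.
membership = min(float(mu_steep), float(mu_rough))
```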

    A Debiasing Variational Autoencoder for Deforestation Mapping

    Deep Learning (DL) algorithms provide numerous benefits in different applications, and they usually yield successful results in scenarios with enough labeled training data and similar class proportions. However, the labeling procedure is a costly and time-consuming task. Furthermore, numerous real-world classification problems present a high level of class imbalance, as the number of samples from the classes of interest differs significantly from that of the remaining classes. In many cases, such conditions tend to promote the creation of biased systems, which negatively impact their performance. Designing unbiased systems has been an active research topic, and recently some DL-based techniques have demonstrated encouraging results in that regard. In this work, we introduce an extension of the Debiasing Variational Autoencoder (DB-VAE) for semantic segmentation. The approach is based on an end-to-end DL scheme and employs the learned latent variables to adjust the individual sampling probabilities of data points during the training process. For that purpose, we adapted the original DB-VAE architecture for dense labeling in the context of deforestation mapping. Experiments were carried out on a region of the Brazilian Amazon, using Sentinel-2 data and the deforestation map from the PRODES project. The reported results show that the proposed DB-VAE approach is able to learn and identify under-represented samples and select them more frequently in the training batches, consequently delivering superior classification metrics.
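The core idea of adjusting sampling probabilities from the latent variables can be sketched in one dimension: estimate the latent density with a histogram and give each sample a selection probability inversely proportional to that density, so under-represented samples are drawn more often. This is a simplified stand-in for the DB-VAE mechanism, with toy data and an invented bin count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "latent codes": a majority mode near 0 and a rare mode near 3.
z = np.concatenate([rng.normal(0.0, 0.5, 950), rng.normal(3.0, 0.2, 50)])

def debiased_sampling_probs(z, bins=20, alpha=1e-3):
    """Histogram estimate of the latent density; sampling probability is
    proportional to 1 / (density + alpha), up-weighting rare samples."""
    hist, edges = np.histogram(z, bins=bins, density=True)
    idx = np.clip(np.digitize(z, edges[1:-1]), 0, bins - 1)  # bin of each sample
    w = 1.0 / (hist[idx] + alpha)
    return w / w.sum()

p = debiased_sampling_probs(z)
# p can now be passed to a weighted sampler when building training batches.
```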

    Analysis and bias improvement of height models based on satellite images

    Height models are a fundamental part of the geo-information required for various applications. The determination of height models by aerial photogrammetry, LiDAR or space images is time-consuming and expensive, and for height models with large area coverage, UAVs are not economical. The freely available height models ASTER GDEM-3, SRTM, AW3D30 and TDM90 can meet various requirements. With the exception of ASTER GDEM-3, which cannot compete with the others, the digital surface models SRTM, AW3D30 and TDM90 are analyzed in detail for accuracy and morphology in four test sites using LiDAR reference DTMs. The accuracy figures root mean square error, standard deviation, NMAD and LE90 are compared, as well as the dependence of accuracy on terrain inclination. The analysis uses a layer for the open areas, excluding forest and settlement areas, and remaining elements that do not belong to a DTM are filtered. Particular attention is paid to systematic errors. The InSAR height models SRTM and TDM90 have some accuracy and morphological restrictions in mountain and settlement areas; even so, the direct sensor orientation of TDM90 is better than that of the others. Optimal results in terms of accuracy and morphology were achieved with AW3D30 corrected by TDM90 for the local absolute height level. This correction reduces the bias and also the tilt of the height models compared to the reference LiDAR DTM.
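The accuracy figures named above have standard definitions, and removing bias and tilt amounts to subtracting a least-squares plane from the height differences. The sketch below implements both; the synthetic bias, tilt and noise values are invented for illustration.

```python
import numpy as np

def nmad(dh):
    """Normalised median absolute deviation, robust against DSM outliers."""
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

def le90(dh):
    """Linear error at 90% confidence: 90th percentile of absolute differences."""
    return np.percentile(np.abs(dh), 90)

def remove_bias_and_tilt(x, y, dh):
    """Fit a least-squares plane dh ~ a*x + b*y + c to the height differences;
    subtracting it removes both the vertical bias and the tilt."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeff, *_ = np.linalg.lstsq(A, dh, rcond=None)
    return dh - A @ coeff

# Synthetic height differences: 2 m bias plus a tilt plus noise (illustrative).
rng = np.random.default_rng(1)
x = rng.uniform(0, 1000, 500)
y = rng.uniform(0, 1000, 500)
dh = 2.0 + 0.001 * x - 0.0005 * y + rng.normal(0, 0.3, 500)

corrected = remove_bias_and_tilt(x, y, dh)
```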

    Evaluation of semantic segmentation methods for deforestation detection in the amazon

    Deforestation is a wide-reaching problem, responsible for serious environmental issues such as biodiversity loss and global climate change. Containing approximately ten percent of all biomass on the planet and home to one tenth of the known species, the Amazon biome has faced substantial deforestation pressure in recent decades. Devising efficient deforestation detection methods is therefore key to combating illegal deforestation and to aiding the conception of public policies directed to promote sustainable development in the Amazon. In this work, we implement and evaluate a deforestation detection approach based on a fully convolutional Deep Learning (DL) model, DeepLabv3+. We compare the results obtained with the devised approach to those obtained with previously proposed DL-based methods (Early Fusion and Siamese Convolutional Network) using Landsat OLI-8 images acquired at different dates, covering a region of the Amazon forest. To evaluate the sensitivity of the methods to the amount of training data, we also evaluate them using varying training sample set sizes. The results show that all tested variants of the proposed method significantly outperform the other DL-based methods in terms of overall accuracy and F1-score. The gains in performance were even more substantial when limited amounts of samples were used in training the evaluated methods. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
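The two evaluation metrics named above, overall accuracy and F1-score, can be computed directly from flattened label maps. The label values below are illustrative, not from the paper's data.

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of correctly classified pixels."""
    return float(np.mean(y_true == y_pred))

def f1_score(y_true, y_pred, positive=1):
    """F1 for the class of interest (here, the 'deforestation' class)."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative flattened label maps (1 = deforestation, 0 = no change):
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
```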

    Domain adaptation with CycleGAN for change detection in the Amazon forest

    Deep learning classification models require large amounts of labeled training data to perform properly, but the production of reference data for most Earth observation applications is a labor intensive, costly process. In that sense, transfer learning is an option to mitigate the demand for labeled data. In many remote sensing applications, however, the accuracy of a deep learning-based classification model trained with a specific dataset drops significantly when it is tested on a different dataset, even after fine-tuning. In general, this behavior can be credited to the domain shift phenomenon. In remote sensing applications, domain shift can be associated with changes in the environmental conditions during the acquisition of new data, variations of objects' appearances, geographical variability and different sensor properties, among other aspects. In recent years, deep learning-based domain adaptation techniques have been used to alleviate the domain shift problem. Recent improvements in domain adaptation technology rely on techniques based on Generative Adversarial Networks (GANs), such as the Cycle-Consistent Generative Adversarial Network (CycleGAN), which adapts images across different domains by learning nonlinear mapping functions between the domains. In this work, we exploit the CycleGAN approach for domain adaptation in a particular change detection application, namely, deforestation detection in the Amazon forest. Experimental results indicate that the proposed approach is capable of alleviating the effects associated with domain shift in the context of the target application. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
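The cycle-consistency constraint at the heart of CycleGAN states that translating an image to the other domain and back should reproduce the input. A minimal sketch of the L1 cycle loss, with trivial stand-in "generators" instead of learned networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, where G maps
    the source domain to the target domain and F maps back."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

# Toy stand-ins for the learned mappings (a constant radiometric shift):
G = lambda img: img + 0.5   # source -> target domain
F = lambda img: img - 0.5   # target -> source domain

x = np.zeros((4, 4))  # toy source-domain image
y = np.ones((4, 4))   # toy target-domain image
```

When F inverts G exactly, as here, the loss is zero; during training the loss penalises mapping pairs that fail to reconstruct the inputs.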

    Cooperative localisation using image sensors in a dynamic traffic scenario

    Localisation is one of the key elements in navigation. Especially with the development of automated driving, precise and reliable localisation becomes essential. In this paper, we report on different cooperation approaches in visual localisation with two vehicles driving in a convoy formation. Each vehicle is equipped with a multi-sensor platform consisting of front-facing stereo cameras and a global navigation satellite system (GNSS) receiver. In the first approach, the GNSS signals are used as eccentric observations for the projection centres of the cameras in a bundle adjustment, whereas the second approach uses markers on the front vehicle as dynamic ground control points (GCPs). As the platforms are moving and data acquisition is not synchronised, we use time-dependent platform poses. These are represented by trajectories consisting of multiple 6 Degree of Freedom (DoF) anchor points between which linear interpolation takes place. In order to investigate the developed approach experimentally, in particular the potential of dynamic GCPs, we captured data using two platforms driving on a public road at normal speed. As a baseline, we determine the localisation parameters of one platform using only data of that platform. We then compute a solution based on image and GNSS data from both platforms. In a third scenario, the front platform is used as a dynamic GCP, related to the trailing platform by markers observed in the images acquired by the latter. We show that both cooperative approaches lead to significant improvements in the precision of the poses of the anchor points after bundle adjustment compared to the baseline. The improvement achieved by including dynamic GCPs is somewhat smaller than the one achieved by relating the platforms with tie points. Finally, we show that for an individual vehicle, the use of dynamic GCPs can compensate for the lack of GNSS data.
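The linear interpolation between 6-DoF anchor points can be sketched as below. Interpolating the three angles per axis is a simplification that is reasonable for the small rotations between neighbouring anchor points; the pose parameterisation (x, y, z, roll, pitch, yaw) is an assumption for illustration.

```python
import numpy as np

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a 6-DoF pose (x, y, z, roll, pitch, yaw) at time t
    between two anchor points (t0, pose0) and (t1, pose1)."""
    s = (t - t0) / (t1 - t0)
    pose0, pose1 = np.asarray(pose0, float), np.asarray(pose1, float)
    d = pose1 - pose0
    # Unwrap the three angle components so interpolation crosses +/-pi correctly.
    d[3:] = (d[3:] + np.pi) % (2 * np.pi) - np.pi
    return pose0 + s * d
```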

    Urban area extent extraction in spaceborne HR and VHR data using multi-resolution features

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an "urban area" is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, "urban area" extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. © 2014 by the authors; licensee MDPI, Basel, Switzerland
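The principle of combining multi-scale spatial features with logical rules can be sketched with a simple texture measure (local variance) computed at two window sizes and joined by a logical AND. The synthetic image and thresholds below are invented for illustration and are not the features or rules of the proposed framework.

```python
import numpy as np

def local_variance(img, w):
    """Texture feature: grey-level variance in a w x w sliding window (valid part)."""
    out = np.empty((img.shape[0] - w + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + w, j:j + w].var()
    return out

# Synthetic scene: homogeneous (non-urban) left half, textured (urban-like) right half.
img = np.zeros((10, 10))
img[:, 5:] = np.indices((10, 5)).sum(axis=0) % 2   # checkerboard texture

fine = local_variance(img, 3)      # small-scale texture
coarse = local_variance(img, 5)    # large-scale texture
fine_c = fine[1:-1, 1:-1]          # crop so both maps cover the same area

# Logical rule combining the two scales (thresholds are illustrative):
urban = (fine_c > 0.1) & (coarse > 0.1)
```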

    Autonomous Sensing and Localization of a Mobile Robot for Multi-Step Additive Manufacturing in Construction

    In contrast to stationary systems, mobile robots have an arbitrarily expandable workspace. As a result, the spatial dimensioning of the task to be mastered plays only a subordinate role and can be scaled as desired. For the construction industry in particular, which requires the handling and production of substantial components, mobile robots offer a virtually unlimited expansion of the workspace thanks to their mobility, and thus increased flexibility. The greatest challenge in mobile robotics lies in the discrepancy between the precision required for the task and the achievable positioning accuracy. External localization systems show significant potential for improvement in this respect but, in many cases, require a line of sight between the measurement system and the robot or a time-consuming calibration of markers. This article therefore presents an approach for an onboard localization system for use in a multi-step additive manufacturing process for building construction. While a SLAM algorithm is used for the initial estimation of the robot's base at the work site, the positioning accuracy is subsequently enhanced in a refined estimation step using a 2D laser scanner. Each time a print job for one segment has been completed, and before a print job is continued from a new location, this 2D scanner is used to create a 3D point cloud of the printed component, enabling layers to be printed on top of each other with sufficient accuracy over many repositioning manoeuvres. When the robot returns to a position for print continuation, the initial and the new point clouds are compared using an ICP algorithm, and the resulting transformation is used to refine the robot's pose estimate relative to the 3D-printed building component. While initial experiments demonstrate the approach's potential, transferring it to large-scale 3D-printed components presents additional challenges, which are highlighted in this paper.
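The ICP comparison of the two point clouds can be sketched in its minimal 2-D point-to-point form: match each source point to its nearest target point, estimate the best rigid transform (Kabsch/SVD), and iterate. This is a generic ICP sketch, not the authors' implementation, and it assumes the clouds start roughly aligned.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching, rigid fit, repeat.
    Returns the net transform mapping the original source onto the target."""
    src = source.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]   # nearest target for each point
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return best_rigid_transform(source, src)
```

The recovered transform plays the role of the pose correction applied to the robot before the print is continued.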

    Geomorphological change detection using object-based feature extraction from multi-temporal LIDAR data

    Multi-temporal LiDAR DTMs are used for the development and testing of a method for geomorphological change analysis in western Austria. Our test area is located on a mountain slope in the Gargellen Valley. Six geomorphological features were mapped using stratified Object-Based Image Analysis (OBIA) and segmentation optimization on 1 m LiDAR DTMs of 2002 and 2005. Based on the 2002 data, the scale parameter for each geomorphological feature was optimized by comparing manually digitized training samples with automatically recognized image objects. Classification rule sets were developed to extract the feature types of interest. The segmentation and classification settings were then applied to both LiDAR DTMs, which allowed the detection of geomorphological change between 2002 and 2005. FROM-TO changes of geomorphological categories were calculated and linked to volumetric changes derived from the subtracted DTMs. Enlargement of mass movement areas at the cost of glacially eroded bedrock was detected, although most changes occurred within mass movement categories and channel incisions, as the result of material removal and/or deposition. The proposed method seems applicable for geomorphological change detection in mountain areas. In order to improve change detection results, processing errors and noise that negatively influence the segmentation accuracy need to be reduced. Despite these concerns, we conclude that stratified OBIA applied to multi-temporal LiDAR datasets is a promising tool for geomorphological change detection.
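Linking DTM subtraction to volumetric change is a direct computation: the height difference per cell times the cell area, summed over the area of interest. The grids below are invented toy data; the 1 m cell size matches the LiDAR DTM resolution mentioned above.

```python
import numpy as np

def volumetric_change(dtm_t1, dtm_t2, cell_size):
    """Net and gross volume change (m^3) between two co-registered DTM grids.
    Positive differences indicate deposition, negative ones material removal."""
    dh = dtm_t2 - dtm_t1
    cell_area = cell_size ** 2          # e.g. 1 m grid -> 1 m^2 per cell
    return dh.sum() * cell_area, np.abs(dh).sum() * cell_area

# Toy 1 m grids: 0.5 m of material removed from a 2 x 2 m patch.
t1 = np.zeros((4, 4))
t2 = t1.copy()
t2[:2, :2] -= 0.5

net, gross = volumetric_change(t1, t2, cell_size=1.0)
```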